
    Efficient Traffic Flow Measurement for ISP Networks

    Traffic flow measurement is of great importance to ISPs for various network engineering tasks. However, directly measuring the traffic flows of an entire ISP network is highly time- and cost-consuming. An interesting problem is how one can obtain the traffic flows of the whole ISP network by monitoring only a small fraction of links. Previous works treat this as a Vertex Cover problem; they suffer from high time complexity, and with these methods the monitoring of some links is redundant. In contrast, we study the problem from the perspective of edges and propose two models. The first, the Extended Edge Cover model, is based on the key observation of flow conservation. It determines a minimum set of monitored links that is 30% smaller than in previous works. The second, the shared-path model, exploits the routing information and topological properties of the network; it is better suited when monitoring resources are limited but one still wants to measure a large part of the network. With this method, one can measure 85% of the network by monitoring 5% of the links. Finally, we evaluate the performance of the two models through extensive simulations. The experimental results show the effectiveness and robustness of both models.
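The flow-conservation observation behind the Extended Edge Cover model can be illustrated with a small sketch (the function and its interface are hypothetical, not from the paper): at any transit node, total inflow equals total outflow, so the flow on a single unmonitored incident link is fully determined by the monitored ones.

```python
def infer_missing_flow(in_flows, out_flows, missing_is_outgoing=True):
    """Infer the flow on the single unmonitored link at a transit node.

    Flow conservation states sum(inflows) == sum(outflows), so if every
    incident link but one is monitored, the remaining flow is determined
    without placing a monitor on it.
    """
    if missing_is_outgoing:
        return sum(in_flows) - sum(out_flows)  # missing outgoing flow
    return sum(out_flows) - sum(in_flows)      # missing incoming flow
```

For example, with monitored inflows of 10 and 5 and a monitored outflow of 12, the remaining outgoing flow must be 3. This is why at most degree-minus-one links per node ever need monitoring.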

    Exploring and Exploiting Uncertainty for Incomplete Multi-View Classification

    Classifying incomplete multi-view data is unavoidable, since arbitrarily missing views are widespread in real-world applications. Although great progress has been made, existing incomplete multi-view methods still struggle to produce trustworthy predictions because of the inherently high uncertainty of missing views. First, a missing view is highly uncertain, so it is not reasonable to provide a single deterministic imputation. Second, the quality of the imputed data itself is highly uncertain. To explore and exploit this uncertainty, we propose an Uncertainty-induced Incomplete Multi-View Data Classification (UIMC) model that classifies incomplete multi-view data within a stable and reliable framework. We construct a distribution and sample from it multiple times to characterize the uncertainty of missing views, and adaptively utilize the samples according to their quality. The proposed method thereby achieves more perceivable imputation and controllable fusion. Specifically, we model each missing view with a distribution conditioned on the available views, thus introducing uncertainty. An evidence-based fusion strategy is then employed to guarantee trustworthy integration of the imputed views. Extensive experiments on multiple benchmark datasets show that our method establishes state-of-the-art results in terms of both accuracy and trustworthiness.
    Comment: CVPR
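The multiple-sampling idea can be sketched as follows. The Gaussian form and the interface are illustrative assumptions; the paper only specifies a distribution conditioned on the available views.

```python
import numpy as np

def sample_missing_view(mu, sigma, n_samples, seed=0):
    """Draw several imputations of a missing view from a (here Gaussian)
    distribution whose parameters are conditioned on the available views,
    instead of committing to a single deterministic fill-in."""
    rng = np.random.default_rng(seed)
    # Each row is one plausible imputation of the missing view.
    return rng.normal(loc=mu, scale=sigma, size=(n_samples, len(mu)))
```

Downstream, each sampled imputation would be weighted by its estimated quality before evidence-based fusion, rather than trusting any single sample.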

    Semantic Equivariant Mixup

    Mixup is a well-established data augmentation technique that extends the training distribution and regularizes neural networks by creating "mixed" samples under the label-equivariance assumption, i.e., a proportional mixup of the input data results in the corresponding labels being mixed in the same proportion. However, previous mixup variants may fail to exploit the label-independent information in mixed samples during training, which usually carries richer semantic information. To further release the power of mixup, we first strengthen the label-equivariance assumption into a semantic-equivariance assumption, which states that a proportional mixup of the input data should lead to the corresponding representations being mixed in the same proportion. We then propose a generic mixup regularization at the representation level, which can further regularize the model with the semantic information in mixed samples. At a high level, the proposed semantic equivariant mixup (SEM) encourages the structure of the input data to be preserved in the representation space, i.e., a change in the input results in the obtained representation changing in the same way. Unlike previous mixup variants, which tend to over-focus on label-related information, the proposed method aims to preserve the richer semantic information in the input via the semantic-equivariance assumption, thereby improving the robustness of the model against distribution shifts. We conduct extensive empirical studies and qualitative analyses to demonstrate the effectiveness of the proposed method. The code of the manuscript is in the supplement.
    Comment: Under review
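A minimal sketch of the representation-level regularizer (the function names and the squared-error form are illustrative assumptions, not the paper's exact loss): the representation of a mixed input should match the same-proportion mixture of the individual representations.

```python
import numpy as np

def semantic_equivariance_penalty(f, x1, x2, lam):
    """Penalty that is zero exactly when f is mixup-equivariant, i.e.
    f(lam*x1 + (1-lam)*x2) == lam*f(x1) + (1-lam)*f(x2)."""
    repr_of_mixed_input = f(lam * x1 + (1 - lam) * x2)
    mix_of_reprs = lam * f(x1) + (1 - lam) * f(x2)
    return float(np.mean((repr_of_mixed_input - mix_of_reprs) ** 2))
```

For any linear encoder the penalty vanishes; for a nonlinear encoder it measures how much the mixing structure of the inputs is distorted in representation space, which is the quantity SEM regularizes.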

    Rice Crop Height Inversion from TanDEM-X PolInSAR Data Using the RVoG Model Combined with the Logistic Growth Equation

    The random volume over ground (RVoG) model has been widely used for vegetation height retrieval based on polarimetric interferometric synthetic aperture radar (PolInSAR) data. However, to date, its application in a time-series framework has not been considered. In this study, the logistic growth equation was introduced into the PolInSAR method for the first time to assist in estimating crop height, and an improved inversion scheme for the corresponding RVoG model parameters combined with the logistic growth equation was proposed. This retrieval scheme was tested using a time series of single-pass HH-VV bistatic TanDEM-X data and reference data obtained over rice fields. The effectiveness of the time-series RVoG model based on the logistic growth equation, and the convenience of using the equation parameters to evaluate vegetation growth status, were analyzed at three test plots. The results show that the improved method can effectively monitor the height variation of crops throughout the whole growth cycle, and the rice height estimation achieved a better accuracy than when single dates were considered. This proves that the proposed method can reduce the dependence on interferometric sensitivity and can monitor the whole process of rice height evolution with only a few PolInSAR observations. This research was funded in part by the National Natural Science Foundation of China (grant nos. 41820104005, 42030112, 41904004) and in part by the Spanish Ministry of Science and Innovation (grant no. PID2020-117303GB-C22).
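The logistic growth equation used as the temporal constraint has a standard closed form; the parameter names here (K for asymptotic height, r for growth rate, t0 for the inflection time) follow the usual convention and are assumed for illustration.

```python
import math

def logistic_height(t, K, r, t0):
    """Standard logistic growth curve: h(t) = K / (1 + exp(-r * (t - t0))).
    Height rises from near zero toward the asymptote K, with the fastest
    growth (the inflection point) occurring at time t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))
```

At t = t0 the curve passes exactly through K/2, so fitting the three parameters to a few PolInSAR height estimates constrains the entire growth trajectory, which is what reduces the dependence on per-date interferometric sensitivity.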

    Forest Height Inversion by Combining Single-Baseline TanDEM-X InSAR Data with External DTM Data

    Forest canopy height estimation is essential for forest management and biomass estimation. In this study, we aimed to evaluate the capacity of TanDEM-X interferometric synthetic aperture radar (InSAR) data to estimate canopy height with the assistance of an external digital terrain model (DTM). A ground-to-volume ratio estimation model was proposed so that canopy height could be precisely estimated from the random-volume-over-ground (RVoG) model. We also refined the RVoG inversion process using the relationship between the estimated penetration depth (PD) and the phase center height (PCH). The proposed method was tested with TanDEM-X InSAR data acquired over relatively homogeneous coniferous forests (Teruel test site) and mixed coniferous and broadleaved forests (La Rioja test site) in Spain. Comparing the TanDEM-X-derived heights with LiDAR-derived heights at plots of 50 m × 50 m, the root-mean-square error (RMSE) was 1.71 m (R2 = 0.88) in the coniferous forests of Teruel and 1.97 m (R2 = 0.90) in La Rioja. To demonstrate the advantage of the proposed method, we compared it with existing methods that ignore the ground scattering contribution, fix the extinction, or rely on simulated spaceborne LiDAR data. The impacts of penetration and terrain slope on the RVoG inversion were also evaluated. The results show that, when a DTM is available, the proposed method achieves the best performance in forest height estimation. This work was supported in part by the National Natural Science Foundation of China under Grants 41820104005, 42030112, and 41904004; in part by the Hunan Natural Science Foundation under Grant 2021JJ30808; and in part by the Spanish Ministry of Science and Innovation, Agencia Estatal de Investigacion, under Projects PID2020-117303GB-C22/AEI/10.13039/501100011033 and PROWARM (PID2020-118444GA-I00/AEI/10.13039/501100011033).
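The PD/PCH relationship exploited in the refinement step can be sketched as follows. This is a simplified geometric illustration, not the paper's full RVoG inversion: the InSAR phase center sits below the canopy top, so canopy height is approximately the phase center height above the DTM plus the penetration depth.

```python
def canopy_height(insar_dem_elev, dtm_elev, penetration_depth):
    """Simplified relation: the phase center height (PCH) above the ground
    (InSAR DEM minus external DTM) plus the penetration depth (PD)
    approximates the height of the canopy top."""
    pch = insar_dem_elev - dtm_elev  # phase center height above ground
    return pch + penetration_depth
```

For example, an InSAR phase center at 520 m over terrain at 500 m with a 4 m penetration depth implies a canopy height of about 24 m. Estimating PD is where the ground-to-volume ratio model comes in.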

    GBG++: A Fast and Stable Granular Ball Generation Method for Classification

    Granular ball computing (GBC), an efficient, robust, and scalable learning method, has become a popular research topic in granular computing. GBC comprises two stages: granular ball generation (GBG) and multi-granularity learning based on the granular ball (GB). However, the stability and efficiency of existing GBG methods need further improvement because of their strong dependence on k-means or k-division. In addition, GB-based classifiers consider only the GB's geometric characteristics when constructing classification rules, while the GB's quality is ignored. Therefore, in this paper, a fast and stable GBG method (GBG++) based on the attention mechanism is proposed first. Specifically, when splitting each GB, the proposed GBG++ method only needs to calculate the distances from a data-driven center to the undivided samples, instead of randomly selecting a center and calculating the distances between it and all samples. Moreover, an outlier detection method is introduced to identify local outliers. Consequently, the GBG++ method significantly improves effectiveness, robustness, and efficiency while being absolutely stable. Second, considering the influence of the number of samples within a GB on the GB's quality, an improved GB-based k-nearest neighbors algorithm (GBkNN++) built on GBG++ is presented, which reduces misclassification at class boundaries. Finally, experimental results indicate that the proposed method outperforms several existing GB-based classifiers and classical machine learning classifiers on 24 public benchmark datasets.
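The cheaper splitting step can be sketched like this (using the mean as the "data-driven center" is an assumption for illustration; the paper derives the center from an attention mechanism): distances are computed once, from a single deterministic center to the still-undivided samples, rather than from a randomly chosen center to all samples, which is what makes the procedure stable across runs.

```python
import numpy as np

def distances_from_center(undivided):
    """One GBG++-style splitting step: take a data-driven center of the
    undivided samples and measure each sample's distance to it. Samples
    far from the center would be split off into new granular balls."""
    center = undivided.mean(axis=0)
    dists = np.linalg.norm(undivided - center, axis=1)
    return center, dists
```

Because the center is a deterministic function of the data, repeated runs produce identical splits, unlike k-means-based GBG, whose random initialization makes the resulting balls vary.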

    Translational progress on tumor biomarkers

    There is an urgent need to apply basic research achievements in the clinic. In particular, mechanistic studies should be developed by bench researchers, depending upon clinical demands, in order to improve the survival and quality of life of cancer patients. To date, translational medicine has been addressed in cancer biology, particularly in the identification and characterization of novel tumor biomarkers. This review focuses on recent achievements and clinical application prospects of tumor biomarkers based on translational medicine.
    Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/115962/1/tca12294.pdf